

Deep Explicit Duration Switching Models for Time Series

Neural Information Processing Systems

Time series forecasting plays a key role in informing industrial and business decisions [17,24,8], while segmentation is useful for understanding biological and physical systems [40,45,34].


Neural Active Learning with Performance Guarantees

Neural Information Processing Systems

We investigate the problem of active learning in the streaming setting in nonparametric regimes, where the labels are stochastically generated from a class of functions on which we make no assumptions whatsoever.
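For intuition, a minimal sketch of the streaming active-learning loop this setting describes: points arrive one at a time, and the learner decides online whether to spend a label query on each one. The nearest-neighbour model and the distance-based query rule below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

class NNRegressor:
    """1-nearest-neighbour regressor; uncertainty = distance to the
    closest labelled point (a simple nonparametric proxy)."""
    def __init__(self):
        self.X, self.y = [], []

    def predict(self, x):
        if not self.X:
            return 0.0, np.inf               # no data yet: maximally uncertain
        d = np.linalg.norm(np.array(self.X) - x, axis=1)
        i = int(np.argmin(d))
        return self.y[i], float(d[i])

    def update(self, x, y):
        self.X.append(x)
        self.y.append(y)

def run_stream(points, oracle, budget, tau=0.2):
    """Query a label only when the model is uncertain and budget remains."""
    model, queried = NNRegressor(), 0
    for x in points:
        _, uncertainty = model.predict(x)
        if queried < budget and uncertainty > tau:
            model.update(x, oracle(x))       # pay one label query
            queried += 1
    return model, queried

# Toy usage: labels come from an unknown function we place no assumptions on.
rng = np.random.default_rng(0)
pts = rng.uniform(0, 1, size=(200, 2))
model, used = run_stream(pts, oracle=lambda x: np.sin(4 * x[0]) + x[1], budget=30)
print("labels used:", used)
```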



Fast White-Box Adversarial Streaming Without a Random Oracle

Feng, Ying, Jain, Aayush, Woodruff, David P.

arXiv.org Artificial Intelligence

Recently, the question of adversarially robust streaming, where the stream is allowed to depend on the randomness of the streaming algorithm, has gained a lot of attention. In this work, we consider a strong white-box adversarial model (Ajtai et al., PODS 2022), in which the adversary has access to all past random coins and the parameters used by the streaming algorithm. We focus on the sparse recovery problem and extend our results to other tasks such as distinct element estimation and low-rank approximation of matrices and tensors. The main drawback of previous work is that it requires a random oracle, which is especially problematic in the streaming model since the amount of randomness counts toward the space complexity of a streaming algorithm; previous work also suffers from a large per-item update time. We construct a near-optimal solution for the sparse recovery problem in white-box adversarial streams, based on the subexponentially secure Learning with Errors assumption. Importantly, our solution does not require a random oracle and has polylogarithmic per-item processing time. We also give results in a related white-box adversarially robust distributed model. Our constructions are based on homomorphic encryption schemes satisfying very mild structural properties that are currently satisfied by most known schemes.
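For intuition, the classical deterministic sketch for exact 1-sparse recovery is shown below; because it uses no randomness at all, a white-box adversary learns nothing from inspecting it. The paper's contribution, not sketched here, is general sparse recovery under the same adversary via LWE-based homomorphic encryption, without a random oracle.

```python
class OneSparseSketch:
    """Streaming sketch that recovers a frequency vector supported on a single
    coordinate, using O(1) counters regardless of the stream length."""
    def __init__(self):
        self.s0 = 0   # sum of update values
        self.s1 = 0   # sum of index * value
        self.s2 = 0   # sum of index^2 * value (used to test 1-sparsity)

    def update(self, index, value):
        self.s0 += value
        self.s1 += index * value
        self.s2 += index * index * value

    def recover(self):
        if self.s0 == 0 or self.s1 % self.s0 != 0:
            return None                       # zero vector, or not 1-sparse
        i = self.s1 // self.s0
        if self.s2 != i * i * self.s0:        # moment test (done mod a prime
            return None                       # in real constructions)
        return (i, self.s0)

sk = OneSparseSketch()
for i, v in [(7, 3), (7, 2), (7, -1)]:        # all mass lands on coordinate 7
    sk.update(i, v)
print(sk.recover())                           # -> (7, 4)
```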


Securer and Faster Privacy-Preserving Distributed Machine Learning

Wang, Hongxiao, Jiang, Zoe L., Zhao, Yanmin, Yiu, Siu-Ming, Yang, Peng, Tan, Zejiu, Jin, Bohan, Xu, Shiyuan, Pan, Shimin

arXiv.org Artificial Intelligence

With the development of machine learning, it has become difficult for a single server to process all the data, so machine learning tasks need to be spread across multiple servers, turning centralized machine learning into distributed machine learning. However, privacy remains an unsolved problem in the distributed setting. Multi-key homomorphic encryption over the torus (MKTFHE) is a suitable candidate to solve this problem. However, there may be security risks in the decryption of MKTFHE, and the most recent MKTFHE results support only Boolean and linear operations. MKTFHE therefore cannot compute non-linear functions such as Sigmoid directly, and it remains hard to perform common machine learning tasks such as logistic regression and neural network training with high performance. This paper first introduces secret sharing to propose a new distributed decryption protocol for MKTFHE, then designs an MKTFHE-friendly activation function, and finally uses both to implement logistic regression and neural network training over MKTFHE. We prove the correctness and security of our decryption protocol and compare the efficiency and accuracy of using Taylor polynomials of the Sigmoid versus our proposed function as the activation function. The experiments show that our function is 10 times more efficient than directly using 7th-order Taylor polynomials, while the accuracy of the trained model is similar to that of a scheme using a high-order polynomial as the activation function.
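The activation-function obstacle is easy to see numerically: a truncated Taylor expansion of the Sigmoid is accurate only near the origin, so a homomorphic scheme restricted to low-degree polynomial circuits needs a friendlier replacement. The snippet below only illustrates that gap with the standard degree-7 Maclaurin expansion; it is not the paper's proposed activation function.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_taylor7(x):
    # degree-7 Maclaurin expansion of the sigmoid (standard calculus)
    return 0.5 + x / 4 - x**3 / 48 + x**5 / 480 - 17 * x**7 / 80640

for x in np.linspace(-6, 6, 13):
    print(f"x={x:+.1f}  sigmoid={sigmoid(x):.4f}  taylor7={sigmoid_taylor7(x):+.4f}")
# The expansion tracks the sigmoid only for roughly |x| < 2 and diverges badly
# outside that range, which is why low-degree Taylor approximations are both
# slow (high degree needed) and inaccurate in encrypted training.
```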


Asymptotic Supervised Predictive Classifiers under Partition Exchangeability

Amiryousefi, Ali

arXiv.org Machine Learning

The convergence of simultaneous and marginal predictive classifiers under partition exchangeability in supervised classification is obtained. The result shows the asymptotic convergence of these classifiers given an unbounded amount of training or test data, such that after observing a sufficiently large amount of data, the differences between these classifiers become negligible. This is an important result from the practical perspective, as in the presence of a sufficiently large amount of data, one can replace the computationally more expensive simultaneous classifier with the simpler marginal one.
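Under partition exchangeability the canonical predictive rule is Ewens's: a previously seen value j is predicted with probability proportional to its count n_j, and an entirely new value with probability proportional to the dispersion parameter theta. A toy sketch of these marginal predictive probabilities follows (theta and the data are illustrative; the simultaneous classifier is not shown).

```python
from collections import Counter

def ewens_predictive(observed, theta=1.0):
    """Predictive distribution of the next item under the Ewens sampling
    formula: a seen value j gets n_j / (n + theta); an entirely new value
    gets theta / (n + theta)."""
    counts = Counter(observed)
    n = len(observed)
    probs = {v: c / (n + theta) for v, c in counts.items()}
    probs["<new>"] = theta / (n + theta)
    return probs

print(ewens_predictive(["a", "a", "b"], theta=0.5))
# {'a': 0.5714..., 'b': 0.2857..., '<new>': 0.1428...}
```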


Echo-State Conditional Restricted Boltzmann Machines

Chatzis, Sotirios (Cyprus University of Technology)

AAAI Conferences

Restricted Boltzmann machines (RBMs) are a powerful generative modeling technique, based on a complex graphical model of hidden (latent) variables. Conditional RBMs (CRBMs) are an extension of RBMs tailored to modeling temporal data. A drawback of CRBMs is that they consider only linear temporal dependencies, which limits their capability to capture complex temporal structure. They also require many variables to model long temporal dependencies, which makes them prone to overfitting. To resolve these issues, in this paper we propose the echo-state CRBM (ES-CRBM): our model uses an echo-state network reservoir in the context of CRBMs to efficiently capture long and complex temporal dynamics, with far fewer trainable parameters than conventional CRBMs. In addition, we introduce an (implicit) mixture of ES-CRBM experts (im-ES-CRBM) to further enhance the capabilities of our ES-CRBM model. The introduced im-ES-CRBM allows for better modeling of temporal observations that may comprise a number of latent or observable subpatterns alternating in a dynamic fashion; it also allows for sequence segmentation within our framework. We apply our methods to sequential data modeling and classification experiments using public datasets. As we show, our approach outperforms both existing RBM-based approaches and related state-of-the-art methods, such as conditional random fields.
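A minimal sketch of the echo-state ingredient: a fixed, randomly initialized recurrent reservoir whose recurrent weights are rescaled to a spectral radius below 1, so each state is a fading nonlinear summary of the whole input history. In the ES-CRBM this state conditions the RBM in place of a linear window of past frames; the dimensions and scaling below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 3, 100

# Fixed random weights: the reservoir itself is never trained, which is
# where the reduction in trainable parameters comes from.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # enforce the echo-state property

def reservoir_states(inputs):
    """Run the reservoir over a sequence; each state summarizes a long,
    nonlinear history of the input and can condition a downstream model."""
    h = np.zeros(n_res)
    states = []
    for u in inputs:
        h = np.tanh(W_in @ u + W @ h)
        states.append(h)
    return np.array(states)

seq = rng.normal(size=(50, n_in))            # toy input sequence
H = reservoir_states(seq)                    # (50, 100) conditioning features
print(H.shape)
```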


A First-Order Formalization of Commitments and Goals for Planning

Meneguzzi, Felipe (Pontifical Catholic University of Rio Grande do Sul) | Telang, Pankaj R. (North Carolina State University) | Singh, Munindar P. (North Carolina State University)

AAAI Conferences

Commitments help model interactions in multiagent systems in a computationally realizable yet high-level manner without compromising the autonomy and heterogeneity of the member agents. Recent work shows how to combine commitments with goals and apply planning methods to enable agents to determine their actions. However, previous approaches to modeling commitments are confined to propositional representations, which limits their applicability in practical cases. We propose a first-order representation and reasoning technique that accommodates templatic commitments and goals that may be applied repeatedly with differing bindings for domain objects. Doing so not only leads to a more perspicuous modeling, but also supports many practical patterns.
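To make "templatic commitments applied repeatedly with differing bindings" concrete, here is a toy instantiation mechanism; the dataclass and the predicate strings are hypothetical illustrations, not the paper's formal language.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Commitment:
    """C(debtor, creditor, antecedent, consequent): the debtor commits to
    the creditor to bring about the consequent once the antecedent holds."""
    debtor: str
    creditor: str
    antecedent: str
    consequent: str

def instantiate(template, **bindings):
    """Ground a templatic commitment by substituting domain objects for
    its first-order variables."""
    return Commitment(*(f.format(**bindings) for f in
                        (template.debtor, template.creditor,
                         template.antecedent, template.consequent)))

# One template, reused with different bindings for domain objects.
template = Commitment("{merchant}", "{customer}",
                      "paid({customer}, {item})", "shipped({merchant}, {item})")
print(instantiate(template, merchant="Acme", customer="Bo", item="book1"))
print(instantiate(template, merchant="Acme", customer="Cy", item="lamp7"))
```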


Scalable Distributed Monte-Carlo Tree Search

Yoshizoe, Kazuki (The University of Tokyo) | Kishimoto, Akihiro (Tokyo Institute of Technology and Japan Science and Technology Agency) | Kaneko, Tomoyuki (The University of Tokyo) | Yoshimoto, Haruhiro (The University of Tokyo) | Ishikawa, Yutaka (The University of Tokyo)

AAAI Conferences

Monte-Carlo Tree Search (MCTS) is remarkably successful in two-player games, but parallel MCTS has been notoriously difficult to scale well, especially in distributed environments. For distributed parallel search, transposition-table-driven scheduling (TDS) is known to be efficient in several domains. We present a massively parallel MCTS algorithm that applies TDS parallelism to the Upper Confidence bound Applied to Trees (UCT) algorithm, the most representative MCTS algorithm. To drastically decrease communication overhead, we introduce a reformulation of UCT called Depth-First UCT. The parallel performance of the algorithm is evaluated on clusters using up to 1,200 cores on artificial game trees. We show that this approach scales well, achieving 740-fold speedups in the best case.
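The heart of TDS is that work follows the data: each search node has a fixed home processor determined by hashing its state, so node statistics never migrate and requests are simply routed to where the data lives. A minimal sketch of that routing rule, with the hashing scheme and worker count as illustrative assumptions:

```python
import hashlib

NUM_WORKERS = 1200   # matches the scale of the clusters used in the evaluation

def home_worker(state: str, num_workers: int = NUM_WORKERS) -> int:
    """TDS routing: a node's UCT statistics live on exactly one worker,
    chosen by hashing a canonical description of the position."""
    digest = hashlib.sha256(state.encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_workers

# A simulation request for a position is sent to its home worker, which
# updates visit counts and values locally, then forwards the search deeper.
for pos in ["start", "start:e4", "start:e4:e5"]:
    print(pos, "->", home_worker(pos))
```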